Montreal AI Ethics Institute


Democratizing AI ethics literacy


Modeling Content Creator Incentives on Algorithm-Curated Platforms

May 31, 2023

🔬 Research Summary by Jiri Hron, a PhD student at the University of Cambridge who worked as a student researcher at Google Brain for most of his PhD.

[Original paper by Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus, and Sarah Dean]


Overview: While content creators on online platforms compete for user attention, their exposure crucially depends on algorithmic choices made by the platform. In this paper, we formalize exposure games, a model of the incentives induced by recommender systems. We prove that seemingly innocuous algorithmic choices in modern recommenders may affect incentivized creator behaviors in significant and unexpected ways. We develop techniques to numerically find equilibria in exposure games and leverage them for pre-deployment audits of recommender systems. 


Introduction

In 2018, Jonah Peretti (CEO of Buzzfeed) raised the alarm when the Facebook newsfeed started boosting junk and divisive content. In Poland, the same update caused politicians to increase negative messaging. Tailoring content to algorithms is not unique to social media. For example, some search engine optimization (SEO) professionals specialize in managing the impacts of Google Search updates. While motivations for adapting content range from economic to socio-political, they often translate into the same operative goal: exposure maximization.

We formalize a game-theoretic model of how a platform’s recommendation system shapes the incentives of content creators, which we call exposure games. By developing tools to find equilibria in exposure games, we show that subtle algorithmic choices may significantly and unexpectedly affect incentivized creator behaviors. These tools can also be used for pre-deployment audits of recommendation systems on such platforms.

Key Insights

How recommender systems induce incentives for content creators

Consider the case of content creators on YouTube and the recommender system that displays “videos to watch next.” Since the revenue of video creators is proportional to their view counts, they are incentivized to maximize exposure, i.e., to tailor their content so that it ranks highly in the “to watch next” column. In our setting, we assume there is a fixed recommender system trained on past data and a fixed population of users. This induces a demand distribution, representing the typical platform traffic over a predefined period.

We study how the algorithmic choices of the recommender system may affect the strategies of exposure-maximizing content creators. We propose an incentive-based behavior model called an exposure game, where creators compete for the finite user attention pool by tailoring content to the given algorithm. When creators act strategically, a steady state—Nash equilibrium (NE)—may be reached, with no one able to improve their exposure unilaterally. The content produced in a Nash equilibrium can thus be interpreted as what the algorithm implicitly incentivizes.

How to model an exposure game

To abstract from the specific content modality (videos, images, text, etc.), we focus on algorithms that model user preferences as an inner product of user and item embeddings (numerical vectors representing the content) and recommend items based on the estimated preference. The expected exposure of a creator is the expected number of interactions under the user demand distribution and the rankings provided by the recommender system. An exposure game consists of a finite number of creators trying to produce content to maximize their exposure. Creators choose to produce content by selecting its embedding vector, rationally adapting to the user demand distribution and the precise workings of the recommender system.
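The setup above can be sketched numerically. The snippet below is an illustrative toy, not the paper's implementation: random user embeddings stand in for samples from the demand distribution, each creator's strategy is a unit-norm item embedding, and a hard top-1 recommender turns inner-product scores into expected exposure. All dimensions and distributions here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8            # embedding dimension (illustrative)
n_users = 1000   # samples standing in for the user demand distribution
n_creators = 5

# User embeddings drawn from an assumed demand distribution.
users = rng.normal(size=(n_users, d))
# Each creator's strategy is an item embedding vector (unit norm here).
items = rng.normal(size=(n_creators, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)

# Estimated preferences: inner products between user and item embeddings.
scores = users @ items.T              # shape (n_users, n_creators)

# Hard (top-1) recommendation: each user is shown their top-scoring item,
# so a creator's expected exposure is the fraction of users they win.
winners = scores.argmax(axis=1)
exposure = np.bincount(winners, minlength=n_creators) / n_users
print(exposure.sum())  # the finite attention pool means exposures sum to 1
```

Because the attention pool is finite, one creator's gain is necessarily another's loss, which is what makes the interaction a game.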

Common factorization-based algorithms also have a non-negative temperature parameter τ, which controls the spread of exposure probabilities over the top-scoring items. This parameter can be thought of as controlling the level of “exploration” performed by the recommender: when τ is zero, the top-ranking content is exposed with certainty; when τ is greater than zero, randomness is added such that all items have a non-zero (albeit potentially small) probability of being exposed. No assumptions are made on how the embeddings are obtained. Thus, all our results apply equally to classical matrix factorization and deep learning-based systems.
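The role of τ can be illustrated with a softmax-style exposure rule, a common way to implement temperature-controlled exploration (the exact rule used in the paper may differ; this is a sketch):

```python
import numpy as np

def exposure_probs(scores, tau):
    """Exposure probabilities over items for one user at temperature tau.

    tau == 0 recovers deterministic top-1 exposure; tau > 0 gives every
    item a non-zero probability via a softmax over the preference scores.
    """
    scores = np.asarray(scores, dtype=float)
    if tau == 0:
        probs = np.zeros_like(scores)
        probs[scores.argmax()] = 1.0
        return probs
    z = (scores - scores.max()) / tau   # shift scores for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = [2.0, 1.0, 0.5]
print(exposure_probs(scores, 0.0))    # [1. 0. 0.]: top item exposed with certainty
print(exposure_probs(scores, 10.0))   # near-uniform: heavy exploration
```

As τ grows, the distribution flattens toward uniform, so the score gap between items matters less and less to a creator's exposure.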

Existence of equilibria and the effect of exploration

First, we theoretically study the existence of different types of equilibria in exposure games, where each producer settles either on a single strategy vector (pure Nash equilibrium) or on a distribution over strategies (mixed Nash equilibrium). Mixed strategies can be thought of as creating multiple items and distributing time or budget over them. When no equilibrium exists, creators may persistently oscillate between strategies as they compete. The key results are that at least one mixed Nash equilibrium exists in every exposure game, whereas pure Nash equilibria need not exist in either the τ = 0 or the τ > 0 case.

However, when we relax the concept of Nash equilibria to situations in which no player can improve their exposure by at least some fixed non-zero amount ε, the situation changes: the number and existence of such equilibria critically depend on the temperature parameter τ. For heavily exploring recommenders, all creators are incentivized to uniformly produce homogeneous content, whereas low exploration levels may lead to the non-existence of equilibria.
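One simple way to search for such ε-equilibria numerically is iterated best responses, sketched below on a toy game. Random search over unit-norm strategies is an illustrative stand-in for the paper's numerical techniques; all function names, parameters, and distributions here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_users, n_creators, tau = 4, 500, 3, 0.5
# Samples standing in for the user demand distribution.
users = rng.normal(size=(n_users, d))

def exposures(items):
    """Expected exposure of each creator under a softmax recommender."""
    scores = users @ items.T
    z = (scores - scores.max(axis=1, keepdims=True)) / tau
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)

def best_response(items, i, n_candidates=200):
    """Approximate creator i's best response by random search on the sphere."""
    best, best_val = items[i].copy(), exposures(items)[i]
    for _ in range(n_candidates):
        cand = rng.normal(size=d)
        cand /= np.linalg.norm(cand)
        trial = items.copy()
        trial[i] = cand
        val = exposures(trial)[i]
        if val > best_val:
            best, best_val = cand, val
    return best

# Random unit-norm starting strategies.
items = rng.normal(size=(n_creators, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)

eps = 1e-3
for step in range(30):
    new_items = items.copy()
    gains = []
    for i in range(n_creators):
        br = best_response(items, i)
        trial = items.copy()
        trial[i] = br
        gains.append(exposures(trial)[i] - exposures(items)[i])
        new_items[i] = br
    if max(gains) <= eps:
        break  # no creator can unilaterally gain more than eps: an eps-equilibrium
    items = new_items
```

If the loop never satisfies the stopping condition and the strategies keep cycling, that is the oscillating behavior one would expect when no (ε-)equilibrium exists at the chosen temperature.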

This may contradict the intuition that more exploration should lead to greater content diversity due to the higher exposure of niche content. One way to understand this result is the tension between randomization and the ability of niche creators to reach their audience: creators may be discouraged from creating niche content when the algorithm is exploring too much (τ high) and encouraged to mercilessly seek and protect their niche when the algorithm performs little exploration (τ low). When the algorithm captures user preferences well, exploration is typically thought of as having a negative impact on the user experience through an immediate reduction in the quality of service. However, the above results show secondary long-term effects.

Pre-deployment audits of strategic creator incentives

We also demonstrate how to utilize exposure games for pre-deployment audits of different rating models on real-world datasets. On data from MovieLens and LastFM, all creators cluster at the same strategy as τ grows. On the MovieLens data, we can corroborate that there is an incentive to target content towards male users, presumably because 71% of the users are male. Such pre-deployment audits can also reveal whether a given algorithm (de)incentivizes content by a particular creator group, which can help limit future harm and discrimination.
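As a sketch of one quantity such an audit can measure, the snippet below estimates how a candidate strategy's exposure splits across user groups, on synthetic data. The 71/29 split mirrors the MovieLens gender statistic mentioned above; everything else (embeddings, temperature, group labels) is illustrative, not the paper's audit pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_users, tau = 4, 2000, 1.0
users = rng.normal(size=(n_users, d))
# Synthetic group labels skewed 71/29, mirroring the MovieLens gender split.
group = rng.random(n_users) < 0.71          # True = majority group

# Two candidate creator strategies (e.g., equilibria found by the game analysis).
items = rng.normal(size=(2, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)

# Softmax exposure probabilities per user.
z = users @ items.T / tau
probs = np.exp(z - z.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Average probability that a user from each group is shown item 0: a large
# gap suggests the algorithm incentivizes targeting one group over the other.
share_majority = probs[group, 0].mean()
share_minority = probs[~group, 0].mean()
print(share_majority, share_minority)
```

Running this comparison per creator group, before deployment, is the kind of check that could flag (de)incentivized content early.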

Between the lines

From social media and streaming to Google Search, many people interact with recommender and information retrieval systems daily. While the core algorithms were developed and analyzed years ago, the socio-economic context in which they operate has received comparatively little attention in the technical computer science literature.

Our producer model has several limitations, from assuming rationality, complete information, and total control, to taking the skill set of each producer to be the same, their exposure to be linear in full exposure, and ignoring algorithmic diversification of recommendations. We also consider the attention pool as fixed and finite, neglecting the problematic reality of the modern attention economy, where online platforms constantly struggle to increase their user numbers and daily usage. While the formalization and study of more realistic producer models is certainly an important direction for future work, a critical hindrance to empirical evaluation is the lack of academic access to the almost exclusively privately owned platforms.

Therefore, increased transparency will be an important step toward incorporating independent pre-deployment audits as a practical addition to the algorithm-auditing toolbox. We hope our research enriches the debate about online platforms’ role in our society and economy.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.